Affinity Scheduling in Staged Server Architectures (CMU-CS-02-113)

Authors

  • Stavros Harizopoulos
  • Anastassia Ailamaki
Abstract

Modern servers typically process request streams by assigning a worker thread to a request, and rely on a round-robin policy for context switching. Although this programming paradigm is intuitive, it is oblivious to the execution state and ignores each software module’s affinity to the processor caches. As a result, resumed threads of execution suffer additional delays due to conflict and compulsory misses while populating the caches with their evicted working sets. Alternatively, the staged programming paradigm divides computation into stages and allows for stage-based (rather than request-thread-based) cohort scheduling that improves module affinity. This technical report introduces (a) four novel cohort scheduling techniques for staged software servers that follow a “production-line” model of operation, and (b) a mathematical framework to methodically quantify the performance trade-offs when using these techniques. Our Markov chain analysis of one of the scheduling techniques matches the simulation results. Using our model on a staged database server, we found that the proposed policies exploit data and instruction locality for a wide range of workload parameter values and outperform traditional techniques such as FCFS and processor sharing. Consequently, our results justify the restructuring of a wide class of software servers to incorporate the staged programming paradigm.

Email: {stavros, natassa}@cs.cmu.edu

Stavros Harizopoulos is partially sponsored by a Lilian Voudouri Foundation Fellowship and gratefully acknowledges their support.

Stavros Harizopoulos and Anastassia Ailamaki, March 2002, CMU-CS-02-113
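As a rough sketch of the “production-line” model described in the abstract, the following Python fragment simulates a staged server in which requests flow through a fixed pipeline of stages and a cohort scheduler drains one stage's queue as a batch before moving on, so that a stage's code and data stay warm in the caches. The stage names, the batch_threshold parameter, and the simple queue-selection rule are illustrative assumptions made for this sketch and are not taken from the report, which studies four different cohort scheduling policies.

# Minimal sketch of stage-based cohort scheduling (illustrative only; not the
# report's implementation). Stage names and batch_threshold are assumptions.
from collections import deque

STAGES = ["parse", "optimize", "execute", "send"]   # hypothetical pipeline

class StagedServer:
    def __init__(self, batch_threshold=4):
        # One queue per stage; a request advances from queue to queue,
        # "production-line" style, instead of being driven by its own thread.
        self.queues = {s: deque() for s in STAGES}
        self.batch_threshold = batch_threshold

    def admit(self, request):
        self.queues[STAGES[0]].append(request)

    def run_once(self):
        # Cohort scheduling: pick a stage whose queue is long enough (or the
        # fullest one) and drain it as a batch, amortizing the cost of loading
        # that stage's instructions and data into the caches over many requests.
        for i, stage in enumerate(STAGES):
            q = self.queues[stage]
            if len(q) >= self.batch_threshold or (q and self._is_fullest(len(q))):
                while q:
                    request = q.popleft()
                    self._process(stage, request)             # stage-local work
                    if i + 1 < len(STAGES):
                        self.queues[STAGES[i + 1]].append(request)
                return stage
        return None                                           # nothing to do

    def _is_fullest(self, n):
        return all(len(q) <= n for q in self.queues.values())

    def _process(self, stage, request):
        request.setdefault("trace", []).append(stage)         # stand-in for real work

if __name__ == "__main__":
    server = StagedServer()
    for rid in range(8):
        server.admit({"id": rid})
    while any(server.queues.values()):
        server.run_once()

Under a conventional thread-per-request design, each request would instead run through all four stages before the next one is scheduled, repeatedly evicting one module's working set to bring in the next; batching work by stage is the locality effect that the report's affinity-aware policies aim to exploit.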


Similar resources


Co-Scheduling of Disk Head Time in Cluster-Based Storage (CMU-PDL-08-113)

Disk timeslicing is a promising technique for storage performance insulation. To work with cluster-based storage, however, timeslices associated with striped data must be co-scheduled on the corresponding servers. This paper describes algorithms for determining global timeslice schedules and mechanisms for coordinating the independent server activities. Experiments with a prototype show that, c...


A Status Report on Research in Transparent Informed Prefetching (CMU-CS-93-113)

This paper focuses on extending the power of caching and prefetching to reduce file read latencies by exploiting application level hints about future I/O accesses. We argue that systems that disclose high-level knowledge can transfer optimization information across module boundaries in a manner consistent with sound software engineering principles. Such Transparent Informed Prefetching (TIP) sy...


Using Cohort-Scheduling to Enhance Server Performance

A server application is commonly organized as a collection of concurrent threads, each of which executes the code necessary to process a request. This software architecture, which causes frequent control transfers between unrelated pieces of code, decreases instruction and data locality, and consequently reduces the effectiveness of hardware mechanisms such as caches, TLBs, and branch predictor...


A Case for Network-Attached Secure Disks (CMU-CS-96-142)

By providing direct data transfer between storage and client, network-attached storage devices have the potential to improve scalability (by removing the server as a bottleneck) and performance (through network striping and shorter data paths). Realizing the technology’s full potential requires careful consideration across a wide range of file system, networking and security issues. To address ...


